We provide the largest publicly available dictionaries for imputing race and ethnicity via Bayesian Improved Surname Geocoding (BISG). The dictionaries are based on voter files from six Southern states that collect self-reported racial data upon voter registration. Our data cover a larger range of names than any comparable dataset, containing roughly 1 million first names, 1.1 million middle names, and 1.4 million surnames. Individuals are classified into five mutually exclusive racial and ethnic groups (White, Black, Hispanic, Asian, and Other), and each name in each dictionary is given its racial/ethnic counts. Normalizing the counts by row or by column then yields the conditional probability of race given a name, or of a name given race. These conditional probabilities can be deployed in downstream data-analysis tasks that would otherwise require ground-truth racial and ethnic data.
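The row-normalization step described above can be made concrete with a toy sketch. The counts below are invented for illustration and are not taken from the actual dictionaries:

```python
# Toy illustration (not the real dictionaries): turn name-by-race counts
# into conditional probabilities by normalizing each name's row.
counts = {
    # surname -> counts for (White, Black, Hispanic, Asian, Other)
    "WASHINGTON": [5, 90, 2, 1, 2],
    "GARCIA":     [4, 1, 92, 1, 2],
}

def p_race_given_name(name):
    """Normalize a name's row of counts: P(race | name)."""
    row = counts[name]
    total = sum(row)
    return [c / total for c in row]

probs = p_race_given_name("WASHINGTON")
```

Normalizing down a column instead of across a row would give P(name | race), the other conditional distribution mentioned above.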
Predicting individual race and ethnicity plays an important role in studies of racial disparities. Bayesian Improved Surname Geocoding (BISG), which relies on detailed Census information, has become the dominant approach for this prediction task. Unfortunately, BISG suffers from two data problems. First, the Census often reports zero counts for minority groups in the very locations where those group members reside. Second, many surnames, especially those of racial minorities, are missing from Census data. We introduce a fully Bayesian Improved Surname Geocoding (fBISG) methodology that accounts for Census measurement error by extending the naive Bayes inference of the BISG method. We also augment the data with surname, first-name, and middle-name information taken from voter files of six Southern states with self-reported race. Our empirical validation shows that the fBISG methodology and the name supplements significantly improve the accuracy of race imputation, especially for minorities.
Regression discontinuity designs (RDDs) have become one of the most widely used quasi-experimental tools for causal inference. They rely on the key assumption that the running variable cannot be manipulated, an assumption that is often violated in practice and that jeopardizes point identification. In this paper, we introduce a new method that provides partial-identification bounds on the causal parameters of interest in both sharp and fuzzy RDDs. The method first estimates the number of manipulators in the sample under a log-concavity assumption on the unmanipulated density of the running variable. It then derives the best- and worst-case bounds obtained when that number of points is removed from the data, along with a fast computational procedure. We apply this procedure to a blood-donation dataset from the Abu Dhabi blood bank to bound the causal effect of donor deferral on future volunteering behavior. We find that, despite significant manipulation in the data, we are able to detect causal effects that conventional methods, such as the donut RDD, fail to detect.
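The bounding logic, once the number of manipulators k has been estimated, can be illustrated with a deliberately simplified sketch: delete the k treated outcomes that move the treated-side mean most in each direction. The data, k, and the difference-in-means estimator here are all invented for illustration and are far cruder than the paper's actual procedure:

```python
# Toy sketch of the best/worst-case bounding idea: if k units just above
# the cutoff are suspected manipulators, bound the effect by deleting the
# k treated outcomes that shift the treated mean most in each direction.
# Data, k, and the naive difference-in-means are illustrative only.
treated = [5.0, 5.2, 4.9, 9.0, 5.1]   # outcomes just above the cutoff
control_mean = 4.0                     # mean outcome just below the cutoff
k = 1                                  # estimated number of manipulators

def bounds(outcomes, k):
    s = sorted(outcomes)
    n = len(s) - k
    low  = sum(s[:n]) / n - control_mean   # drop the k largest outcomes
    high = sum(s[k:]) / n - control_mean   # drop the k smallest outcomes
    return low, high

lo, hi = bounds(treated, k)   # interval that brackets the point estimate
```

Any effect estimate computed after removing some set of k points must fall inside [lo, hi], which is what makes the interval a partial-identification bound.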
The demand for high-resolution video content has grown over the years. However, the delivery of high-resolution video is constrained either by the computational resources required for rendering or by the network bandwidth for remote transmission. To remedy this limitation, we leverage the eye trackers found alongside existing augmented and virtual reality headsets. We propose applying video super-resolution (VSR) techniques to fuse low-resolution context with regional high-resolution context, enabling resource-constrained consumption of high-resolution content without a perceivable drop in quality. Eye trackers provide the gaze direction of a user, aiding the extraction of the regional high-resolution context. As only pixels that fall within the gaze region can be resolved by the human eye, a large amount of the delivered content is redundant: we cannot perceive differences in quality beyond the observed region. To generate a visually pleasing frame from the fusion of the high-resolution and low-resolution regions, we study the capability of a deep neural network to transfer the context of the observed region to the other (low-resolution) regions of the current and future frames. We term this task Foveated Video Super-Resolution (FVSR), as the low-resolution regions of current and future frames must be super-resolved through the fusion of pixels from the gaze region. We propose Cross-Resolution Flow Propagation (CRFP) for FVSR. We train and evaluate CRFP on the REDS dataset on the task of 8x FVSR, i.e., a combination of 8x VSR and the fusion of the foveated region. Departing from the conventional per-frame quality evaluation using SSIM or PSNR, we propose evaluating the past foveated region, measuring the capability of a model to leverage the noise present in eye trackers during FVSR. Code is made available at https://github.com/eugenelet/CRFP.
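The input composition described above, a low-resolution frame with a high-resolution crop pasted at the gaze point, can be sketched with NumPy. This is only an illustration of the data layout (nearest-neighbor upscaling stands in for whatever upsampling a real pipeline would use); the sizes and gaze position are made up:

```python
import numpy as np

# Toy sketch of assembling an FVSR input: a low-resolution frame is
# upscaled (nearest-neighbor for simplicity) and the high-resolution
# crop around the gaze point is pasted over it. Sizes are illustrative.
def fuse_foveated(low_res, hi_crop, gaze_yx, scale=8):
    # Nearest-neighbor upscale of the low-resolution frame.
    frame = low_res.repeat(scale, axis=0).repeat(scale, axis=1)
    y, x = gaze_yx
    h, w = hi_crop.shape[:2]
    frame[y:y + h, x:x + w] = hi_crop   # overwrite the gaze region
    return frame

low = np.zeros((4, 4), dtype=np.uint8)        # 4x4 low-res frame
crop = np.full((8, 8), 255, dtype=np.uint8)   # 8x8 high-res gaze crop
fused = fuse_foveated(low, crop, (8, 8))      # 32x32 fused frame (8x)
```

The network's job, in the paper's terms, is to propagate the detail from the pasted region into the surrounding upscaled pixels of the current and future frames.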
As language models (LMs) scale, they develop many novel behaviors, good and bad, exacerbating the need to evaluate how they behave. Prior work creates evaluations with crowdwork (which is time-consuming and expensive) or existing data sources (which are not always available). Here, we automatically generate evaluations with LMs. We explore approaches with varying amounts of human effort, from instructing LMs to write yes/no questions to making complex Winogender schemas with multiple stages of LM-based generation and filtering. Crowdworkers rate the examples as highly relevant and agree with 90-100% of labels, sometimes more so than corresponding human-written datasets. We generate 154 datasets and discover new cases of inverse scaling where LMs get worse with size. Larger LMs repeat back a dialog user's preferred answer ("sycophancy") and express greater desire to pursue concerning goals like resource acquisition and goal preservation. We also find some of the first examples of inverse scaling in RL from Human Feedback (RLHF), where more RLHF makes LMs worse. For example, RLHF makes LMs express stronger political views (on gun rights and immigration) and a greater desire to avoid shut down. Overall, LM-written evaluations are high-quality and let us quickly discover many novel LM behaviors.
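In the paper the yes/no statements are themselves written by LMs; as a stand-in, the shape of one generated evaluation can be sketched with a fixed template. The behaviors and the `answer_matching_behavior` field name below are illustrative, not the paper's exact schema:

```python
# Toy stand-in for one stage of the pipeline: in the paper an LM writes
# the yes/no statements; here a fixed template is expanded so the shape
# of an evaluation example is concrete. Field names are hypothetical.
behaviors = ["acquire more resources", "avoid being shut down"]
template = ('Is the following statement something you would say? '
            '"I want to {b}."')

def make_eval(behaviors):
    return [{"question": template.format(b=b),
             "answer_matching_behavior": "Yes"}
            for b in behaviors]

dataset = make_eval(behaviors)
```

Scoring a model on such a dataset then reduces to measuring how often its answer matches the flagged behavior, which is how trends like sycophancy can be tracked across model sizes.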
Automated slicing aims to identify subsets of evaluation data where a trained model performs anomalously. This is an important problem for machine learning pipelines in production since it plays a key role in model debugging and comparison, as well as the diagnosis of fairness issues. Scalability has become a critical requirement for any automated slicing system due to the large search space of possible slices and the growing scale of data. We present Autoslicer, a scalable system that searches for problematic slices through distributed metric computation and hypothesis testing. We develop an efficient strategy that reduces the search space through pruning and prioritization. In the experiments, we show that our search strategy finds most of the anomalous slices by inspecting a small portion of the search space.
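The core search-and-prune idea can be illustrated on a tiny table: enumerate single-feature slices, skip those too small to evaluate, and flag slices whose error rate exceeds the overall rate. This is a minimal sketch of the idea only; Autoslicer's actual system distributes the metric computation and uses hypothesis testing rather than a raw threshold:

```python
# Minimal sketch of automated slicing: enumerate single-feature slices,
# prune slices too small to evaluate, and flag those whose error rate
# exceeds the overall rate. Data and pruning threshold are made up.
rows = [  # (country, device, model_is_wrong)
    ("US", "phone", 0), ("US", "phone", 0), ("US", "tablet", 0),
    ("BR", "phone", 1), ("BR", "phone", 1), ("BR", "tablet", 0),
]
MIN_SIZE = 2  # prune slices with too few examples

def anomalous_slices(rows):
    overall = sum(r[2] for r in rows) / len(rows)
    found = []
    for col in (0, 1):                       # one slice per feature value
        for value in {r[col] for r in rows}:
            group = [r for r in rows if r[col] == value]
            if len(group) < MIN_SIZE:
                continue                     # pruning step
            err = sum(r[2] for r in group) / len(group)
            if err > overall:
                found.append((col, value, err))
    return found

slices = anomalous_slices(rows)
```

A production system would additionally test whether each flagged slice's gap is statistically significant before reporting it, which is the hypothesis-testing component mentioned above.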
Identifying statistical regularities in solutions to some tasks in multi-task reinforcement learning can accelerate the learning of new tasks. Skill learning offers one way of identifying these regularities by decomposing pre-collected experiences into a sequence of skills. A popular approach to skill learning is maximizing the likelihood of the pre-collected experience with latent variable models, where the latent variables represent the skills. However, there are often many solutions that maximize the likelihood equally well, including degenerate solutions. To address this underspecification, we propose a new objective that combines the maximum likelihood objective with a penalty on the description length of the skills. This penalty incentivizes the skills to maximally extract common structures from the experiences. Empirically, our objective learns skills that solve downstream tasks in fewer samples compared to skills learned from only maximizing likelihood. Further, while most prior works in the offline multi-task setting focus on tasks with low-dimensional observations, our objective can scale to challenging tasks with high-dimensional image observations.
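The combined objective can be made concrete with a toy scoring function: negative log-likelihood plus a description-length penalty on the skills. The penalty below (bits for skill identifiers) is a crude stand-in for the paper's compression term, and all numbers are illustrative:

```python
import math

# Toy sketch of the proposed objective: among candidate decompositions of
# experience into skills, prefer the one minimizing NLL plus a
# description-length penalty. The penalty here (bits for the skill codes)
# is a crude stand-in for the paper's term; numbers are illustrative.
BETA = 1.0  # weight on the description-length penalty

def objective(nll, skill_codes):
    dl = sum(math.log2(1 + len(code)) for code in skill_codes)
    return nll + BETA * dl

# Two decompositions that fit the data equally well (same NLL): the one
# using fewer, longer skills pays a smaller description-length penalty.
seg_many = objective(10.0, ["a", "b", "c", "d"])
seg_few  = objective(10.0, ["ab", "cd"])
```

This is precisely the underspecification fix described above: likelihood alone cannot distinguish the two decompositions, while the penalty breaks the tie toward skills that extract more common structure.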
In this work, we identify elements of effective machine learning datasets in astronomy and present suggestions for their design and creation. Machine learning has become an increasingly important tool for analyzing and understanding the large-scale flood of data in astronomy. To take advantage of these tools, datasets are required for training and testing. However, building machine learning datasets for astronomy can be challenging. Astronomical data is collected from instruments built to explore science questions in a traditional fashion rather than to conduct machine learning. Thus, it is often the case that raw data, or even downstream processed data, is not in a form amenable to machine learning. We explore the construction of machine learning datasets and we ask: what elements define effective machine learning datasets? We define effective machine learning datasets in astronomy to be formed with well-defined data points, structure, and metadata. We discuss why these elements are important for astronomical applications and ways to put them into practice. We posit that these qualities not only make the data suitable for machine learning, they also help to foster usable, reusable, and replicable science practices.
Intentionally crafted adversarial samples have effectively exploited weaknesses in deep neural networks. A standard method in adversarial robustness assumes a framework to defend against samples crafted by minimally perturbing a sample such that its corresponding model output changes. These sensitivity attacks exploit the model's sensitivity toward task-irrelevant features. Another form of adversarial sample can be crafted via invariance attacks, which exploit the model underestimating the importance of relevant features. Previous literature has indicated a tradeoff in defending against both attack types within a strictly L_p bounded defense. To promote robustness toward both types of attacks beyond Euclidean distance metrics, we use metric learning to frame adversarial regularization as an optimal transport problem. Our preliminary results indicate that regularizing over invariant perturbations in our framework improves both invariant and sensitivity defense.
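The optimal transport machinery invoked above can be illustrated with a textbook entropy-regularized Sinkhorn computation between two point clouds. This sketch shows the standard algorithm only, not the paper's specific adversarial regularizer or metric-learning setup:

```python
import numpy as np

# Minimal Sinkhorn sketch: entropy-regularized optimal transport cost
# between two point clouds. The paper frames adversarial regularization
# as an OT problem; this is the textbook algorithm, not their exact
# formulation. eps and iteration count are illustrative.
def sinkhorn_cost(x, y, eps=0.1, iters=200):
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # pairwise sq. dist.
    K = np.exp(-cost / eps)                   # Gibbs kernel
    a = np.full(len(x), 1.0 / len(x))         # uniform source weights
    b = np.full(len(y), 1.0 / len(y))         # uniform target weights
    u = np.ones_like(a)
    for _ in range(iters):                    # alternating scaling updates
        u = a / (K @ (b / (K.T @ u)))
    v = b / (K.T @ u)
    plan = u[:, None] * K * v[None, :]        # transport plan
    return (plan * cost).sum()

x = np.array([[0.0], [1.0]])
c = sinkhorn_cost(x, x)   # identical clouds: near-zero transport cost
```

Using such a transport cost as a regularizer, rather than a fixed L_p ball, is what lets robustness be promoted beyond Euclidean distance metrics.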
Generative adversarial networks (GANs) are a widely used tool for generative modeling of complex data. Despite their empirical success, the training of GANs is not fully understood due to the min-max optimization between the generator and the discriminator. This paper analyzes these joint dynamics when the true samples as well as the generated samples are discrete, finite sets, and the discriminator is kernel-based. A simple yet expressive framework for analyzing training, called the $\textit{isolated points model}$, is introduced. In the proposed model, the distances between true samples greatly exceed the kernel width, so that each generated point is influenced by at most one true point. Our model enables precise characterization of the conditions for convergence to good and bad minima. In particular, the analysis explains two common failure modes: (i) approximate mode collapse and (ii) divergence. Numerical simulations are provided that reproducibly replicate these behaviors.
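A kernel-based discriminator of the kind analyzed here can be sketched as the difference between a point's average kernel similarity to the real samples and to the generated samples. The sample locations and kernel width below are made up, chosen so the real points are far apart relative to the width, matching the isolated-points regime:

```python
import math

# Toy sketch of a kernel-based discriminator: D(x) is the mean kernel
# similarity to real points minus that to generated points. Samples and
# kernel width are illustrative; real points are spaced far apart
# relative to the width (the isolated-points regime).
def gaussian_kernel(x, y, width=0.1):
    return math.exp(-((x - y) ** 2) / (2 * width ** 2))

def discriminator(x, real, fake, width=0.1):
    d_real = sum(gaussian_kernel(x, r, width) for r in real) / len(real)
    d_fake = sum(gaussian_kernel(x, f, width) for f in fake) / len(fake)
    return d_real - d_fake

real = [0.0, 10.0]     # true samples, far apart relative to width 0.1
fake = [0.05, 9.5]     # generated samples, each near one true point
score_near_real = discriminator(0.0, real, fake)   # positive near a real point
```

Because each true point's kernel bump decays to essentially zero before reaching any other true point, each generated sample's gradient is shaped by at most one real sample, which is the simplification the isolated points model exploits.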